
    Validation of nonlinear PCA

    Linear principal component analysis (PCA) can be extended to a nonlinear PCA by using artificial neural networks. But the benefit of curved components requires careful control of the model complexity. Moreover, standard techniques for model selection, including cross-validation and more generally the use of an independent test set, fail when applied to nonlinear PCA because of its inherent unsupervised characteristics. This paper presents a new approach for validating the complexity of nonlinear PCA models by using the error in missing data estimation as a criterion for model selection. It is motivated by the idea that only the model of optimal complexity is able to predict missing values with the highest accuracy. While standard test set validation usually favours over-fitted nonlinear PCA models, the proposed model validation approach correctly selects the optimal model complexity. Comment: 12 pages, 5 figures
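
    The idea can be illustrated with a small sketch. Below, a simple autoencoder stands in for nonlinear PCA (using scikit-learn's MLPRegressor for brevity), hidden-layer width plays the role of model complexity, and the score is the reconstruction error on entries that were deliberately removed and mean-imputed. Everything here (data, masking fraction, network sizes) is an illustrative assumption, not the paper's setup.

```python
# Sketch of missing-data validation for nonlinear PCA (not the paper's code).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic data lying near a 1-D curve embedded in 3-D.
t = rng.uniform(-1, 1, size=(500, 1))
X = np.hstack([t, t**2, t**3]) + 0.05 * rng.normal(size=(500, 3))

def missing_data_error(X, hidden_units, mask_frac=0.1):
    """Train an autoencoder on X, then score how well it predicts
    artificially removed entries (mean-imputed before prediction)."""
    model = MLPRegressor(hidden_layer_sizes=(hidden_units,),
                         max_iter=2000, random_state=0)
    model.fit(X, X)  # autoassociative training: reconstruct the input

    mask = rng.random(X.shape) < mask_frac
    X_missing = X.copy()
    col_mean = X.mean(axis=0)
    X_missing[mask] = np.broadcast_to(col_mean, X.shape)[mask]  # mean imputation
    X_hat = model.predict(X_missing)
    return np.mean((X_hat[mask] - X[mask]) ** 2)

# Select the complexity whose model predicts the missing values best.
for k in (2, 5, 20, 100):
    print(k, missing_data_error(X, k))
```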

    Eco-intelligent factories: Timescales for environmental decision support

    Manufacturing decisions are currently made based on considerations of cost, time and quality. However, there is increasing pressure to also routinely incorporate environmental considerations into the decision-making processes. Despite the existence of a number of tools for environmental analysis of manufacturing activities, there does not appear to be a structured approach for generating relevant environmental information that can be fed into manufacturing decision making. This research proposes an overarching structure that leads to three approaches, pertaining to different timescales, that enable the generation of environmental information suitable for consideration during decision making. The approaches are demonstrated through three industrial case studies.

    Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction

    Current bundle adjustment solvers such as the Levenberg-Marquardt (LM) algorithm are limited by the bottleneck in solving the Reduced Camera System (RCS), whose dimension is proportional to the number of cameras. When the problem is scaled up, this step is neither efficient in computation nor manageable for a single compute node. In this work, we propose a stochastic bundle adjustment algorithm which seeks to decompose the RCS approximately inside the LM iterations to improve efficiency and scalability. It first reformulates the quadratic programming problem of an LM iteration based on the clustering of the visibility graph by introducing equality constraints across clusters. Then, we propose to relax it into a chance-constrained problem and solve it through a sampled convex program. The relaxation is intended to eliminate the interdependence between clusters embodied by the constraints, so that a large RCS can be decomposed into independent linear sub-problems. Numerical experiments on unordered Internet image sets and sequential SLAM image sets, as well as distributed experiments on large-scale datasets, have demonstrated the high efficiency and scalability of the proposed approach. Codes are released at https://github.com/zlthinker/STBA. Comment: Accepted by ECCV 2020
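
    The core decomposition step can be caricatured in a few lines: once the cross-cluster constraints are relaxed away, the RCS splits into independent per-cluster blocks that can be solved separately (and on different nodes). The sketch below shows only this last step on a dense toy matrix; the matrix, the clustering and the sizes are assumptions, this is not the released STBA code, and the stochastic sampling of constraints is omitted.

```python
# Toy sketch of the cluster decomposition idea (not the STBA implementation).
import numpy as np

rng = np.random.default_rng(1)

n = 8                                  # number of cameras (toy size)
A = rng.normal(size=(n, n))
S = A @ A.T + n * np.eye(n)            # SPD stand-in for the RCS
b = rng.normal(size=n)

# Hard-coded partition standing in for a visibility-graph clustering.
clusters = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5, 6, 7])]

def solve_decomposed(S, b, clusters):
    """Solve each within-cluster block independently, dropping the
    cross-cluster coupling that the relaxation removes."""
    x = np.zeros_like(b)
    for idx in clusters:
        x[idx] = np.linalg.solve(S[np.ix_(idx, idx)], b[idx])
    return x

x_approx = solve_decomposed(S, b, clusters)
x_exact = np.linalg.solve(S, b)
print("relative error of decomposed solve:",
      np.linalg.norm(x_approx - x_exact) / np.linalg.norm(x_exact))
```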

    The use of a physiologically based pharmacokinetic model to evaluate deconvolution measurements of systemic absorption

    BACKGROUND: An unknown input function can be determined by deconvolution using the systemic bolus input function (r), itself determined using an experimental input of duration ranging from a few seconds to many minutes. The quantitative relation between the duration of the input and the accuracy of r is unknown. Although a large number of deconvolution procedures have been described, these routines are not available in a convenient software package.

    METHODS: Four deconvolution methods are implemented in a new, user-friendly software program (PKQuest). Three of these methods are characterized by input parameters that are adjusted by the user to provide the "best" fit. A new approach is used to determine these parameters, based on the assumption that the input can be approximated by a gamma distribution. Deconvolution methodologies are evaluated using data generated from a physiologically based pharmacokinetic (PBPK) model.

    RESULTS AND CONCLUSIONS: The 11-compartment PBPK model is accurately described by either a 2- or 3-exponential function, depending on whether or not there is significant tissue binding. For an accurate estimate of r, the first venous sample should be taken at or before the end of the constant infusion, and a long (10-minute) constant infusion is preferable to a bolus injection. For noisy data, gamma-distribution deconvolution provides the best result if the input has the form of a gamma distribution. For other input functions, good results are obtained using deconvolution methods based on modelling the input with either a B-spline or a uniform dense set of time points.
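
    As a rough illustration of the gamma-distribution approach, the sketch below parameterises the unknown input as a scaled gamma density, convolves it with a known 2-exponential bolus response r, and fits the three parameters by least squares. The response coefficients, time grid, dose and noise level are all invented for the example; this is not the PKQuest implementation.

```python
# Sketch of gamma-input deconvolution (assumed setup, not PKQuest).
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma

dt = 0.1
t = np.arange(0, 24, dt)
r = 0.7 * np.exp(-0.5 * t) + 0.3 * np.exp(-0.05 * t)   # 2-exponential bolus response

def predicted(params):
    dose, shape, scale = params
    inp = dose * gamma.pdf(t, a=shape, scale=scale)     # gamma-shaped input rate
    return np.convolve(inp, r)[:len(t)] * dt            # C = (input * r)(t)

# Synthetic "observed" concentrations from a known input, plus noise.
rng = np.random.default_rng(2)
C_obs = predicted([10.0, 2.0, 1.5]) + 0.02 * rng.normal(size=len(t))

fit = least_squares(lambda p: predicted(p) - C_obs,
                    x0=[5.0, 1.0, 1.0],
                    bounds=([0, 0.1, 0.1], [100, 20, 20]))
print("recovered dose/shape/scale:", fit.x)
```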

    Optimisation of NMR dynamic models I. Minimisation algorithms and their performance within the model-free and Brownian rotational diffusion spaces

    The key to obtaining the model-free description of the dynamics of a macromolecule is the optimisation of the model-free and Brownian rotational diffusion parameters using the collected R1, R2 and steady-state NOE relaxation data. The problem of optimising the chi-squared value is often assumed to be trivial; however, the long chain of dependencies required for its calculation complicates the model-free chi-squared space. Convolutions are induced by the Lorentzian form of the spectral density functions, the linear recombinations of certain spectral density values to obtain the relaxation rates, the calculation of the NOE using the ratio of two of these rates, and finally the quadratic form of the chi-squared equation itself. Two major topological features of the model-free space complicate optimisation. The first is a long, shallow valley which commences at infinite correlation times and gradually approaches the minimum. The second, and most severe, convolution occurs for motions on two timescales, in which the minimum is often located at the end of a long, deep, curved tunnel or multidimensional valley through the space. A large number of optimisation algorithms are investigated and their performance compared to determine which techniques are suitable for use in model-free analysis. Local optimisation algorithms are shown to be sufficient for minimisation not only within the model-free space but also for the minimisation of the Brownian rotational diffusion tensor. In addition, the performance of the programs Modelfree and Dasha is investigated. A number of model-free optimisation failures were identified: the inability to slide along the limits, the singular matrix failure of the Levenberg–Marquardt minimisation algorithm, the low precision of both programs, and a bug in Modelfree. Significantly, the singular matrix failure of the Levenberg–Marquardt algorithm occurs when internal correlation times are undefined and is greatly amplified in model-free analysis by both the grid search and constraint algorithms. The program relax (http://www.nmr-relax.com) is also presented as a new software package designed for the analysis of macromolecular dynamics through the use of NMR relaxation data, and which alleviates all of the problems inherent within model-free analysis.
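
    The chain of dependencies described above (Lorentzian spectral density, linear recombination into rates, quadratic chi-squared) can be sketched as follows. The Lipari-Szabo single-timescale form of J(w) is standard, but the coefficient matrix C mixing J(w) values into rates is a hypothetical stand-in for the full R1/R2 expressions, which carry dipolar and CSA constants omitted here; this is not the relax code.

```python
# Minimal sketch of the model-free chi-squared chain (assumed, not relax).
import numpy as np
from scipy.optimize import minimize

def J(w, S2, tm, te):
    """Lorentzian model-free spectral density (one internal motion)."""
    tp = 1.0 / (1.0 / tm + 1.0 / te)          # effective correlation time
    return 0.4 * (S2 * tm / (1 + (w * tm) ** 2)
                  + (1 - S2) * tp / (1 + (w * tp) ** 2))

tm = 8e-9                                      # fixed global tumbling time (s)
w = 2 * np.pi * np.array([0.0, 60.8e6, 500.0e6])   # sampled frequencies (Hz)
C = np.array([[1.0, 3.0, 6.0],                 # hypothetical linear mixes of
              [2.0, 0.5, 4.0]])                # J(w) values standing in for rates

def rates(S2, te_ps):
    return C @ J(w, S2, tm, te_ps * 1e-12)     # linear recombination step

R_obs = rates(0.8, 50.0)                       # synthetic "measured" rates
sigma = 0.02 * np.abs(R_obs)                   # assumed experimental errors

def chi2(p):                                   # quadratic chi-squared form
    return np.sum(((rates(*p) - R_obs) / sigma) ** 2)

res = minimize(chi2, x0=[0.5, 100.0], method="Nelder-Mead")
print("fitted S2, te (ps):", res.x)
```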

    Planar methods and grossone for the Conjugate Gradient breakdown in nonlinear programming

    This paper deals with an analysis of the Conjugate Gradient (CG) method (Hestenes and Stiefel in J Res Nat Bur Stand 49:409-436, 1952) in the presence of degeneracy on indefinite linear systems. Several approaches have been proposed in the literature to address this drawback in optimization frameworks, including reformulating the original linear system or resorting to solving it approximately. All the proposed alternatives seem to rely on algebraic considerations, and basically pursue the idea of improving numerical efficiency. In this regard, here we sketch two separate analyses of the possible CG degeneracy. First, we detail a more standard algebraic viewpoint of the problem, suggested by planar methods. Then, another algebraic perspective is detailed, relying on a recently proposed theory which includes an additional number, namely grossone. The use of grossone allows one to work numerically with infinities and infinitesimals. The results obtained using the two proposed approaches match perfectly, showing that grossone may represent a fruitful and promising tool to be exploited within Nonlinear Programming.
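
    The degeneracy in question is easy to reproduce: on an indefinite system the CG step length alpha = (r'r)/(p'Ap) blows up whenever the curvature p'Ap along the search direction vanishes. The sketch below is a plain CG loop with a pivot check that flags this breakdown; it implements neither the planar recovery nor the grossone arithmetic, and the toy matrix is chosen so that the very first direction is degenerate.

```python
# Sketch of the CG pivot breakdown on an indefinite system
# (illustration only; not a planar or grossone implementation).
import numpy as np

def cg_with_breakdown_check(A, b, tol=1e-10, max_iter=100):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for k in range(max_iter):
        pAp = p @ A @ p                 # curvature along the search direction
        if abs(pAp) <= tol * (p @ p):   # degenerate pivot: CG breaks down
            print(f"breakdown at iteration {k}: p'Ap ~ 0")
            return x
        alpha = (r @ r) / pAp
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        if np.linalg.norm(r_new) < tol:
            return x
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

# Indefinite example where the first direction p = b already has b'Ab = 0.
A = np.diag([1.0, -1.0])
b = np.array([1.0, 1.0])
cg_with_breakdown_check(A, b)
```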